
    A comparative study of two automated workgroup composition strategies

    Nowadays, the professional environment of leading companies requires multidisciplinary teams to be created, including both internal and external experts, to adequately face the challenges of a fast-evolving and complex market. For newcomers, this situation can be difficult to handle if no previous experience has prepared them for it. This is where college education finds its place, and in fact the curricula of different university degrees are being updated to include more transversal competences, such as leadership and cooperation skills within groups. However, such efforts still remain at an amateur level in most cases due to the lack of specific expertise of most instructors, to the limitations in creating truly multidisciplinary groups, and to the scarce resources available to maximize the benefits of such experiences. Students also show little involvement in this issue, typically opting for approaches that minimize their effort. Thus, simple yet effective strategies that help instructors make a meaningful change to the current status quo are necessary. To address this need, in this paper we present the results of a 2-year study where students were forced to team up with other partners based on the results of a computer networking skills-ranking exam. In the first experiment, groups were formed by students achieving a similar performance (homogeneous), while in the second experiment groups were formed so that the average score of group members was the same (mostly heterogeneous). Through detailed surveys completed by students at the end of both courses, we find that, compared to an alternative group assignment strategy promoting group heterogeneity, having partners with similar skills neither improves the coordination between group members nor the students' perception of the usefulness of the experience for future jobs. In fact, their rating of the overall experience was mostly the same, and, to our surprise, partner expertise on the course topics was not a motivating factor: students' complaints were always about not being able to choose their partner. Thus, we consider that a simple group-formation strategy (e.g. random assignment) suffices, but it should be adopted in all courses from the first year of the degree.
    Tavares De Araujo Cesariny Calafate, CM.; Tornell, SM.; Arlandis, J. (2016). A comparative study of two automated workgroup composition strategies. In INTED2016 Proceedings. IATED Digital Library. 5890-5899. doi:10.21125/inted.2016.0409
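    The two composition strategies compared in the study lend themselves to a very small piece of automation. The following Python sketch is only an illustration of the idea, not the authors' actual procedure: it assumes a list of (student, exam_score) pairs from the skills-ranking exam and a fixed group size, builds homogeneous groups by slicing the ranking into consecutive blocks, and builds groups with roughly equal average scores using a serpentine draft.

    # Illustrative sketch (not the authors' code) of two automated
    # workgroup composition strategies driven by a skills-ranking exam.

    def homogeneous_groups(scores, group_size):
        # Students with similar exam scores end up in the same group.
        ranked = sorted(scores, key=lambda s: s[1], reverse=True)
        return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

    def heterogeneous_groups(scores, group_size):
        # Serpentine (snake) draft so that group average scores are roughly equal.
        ranked = sorted(scores, key=lambda s: s[1], reverse=True)
        n_groups = -(-len(ranked) // group_size)  # ceiling division
        groups = [[] for _ in range(n_groups)]
        for rnd in range(group_size):
            block = ranked[rnd * n_groups:(rnd + 1) * n_groups]
            if rnd % 2:  # reverse every other pass to balance the averages
                block = block[::-1]
            for group, student in zip(groups, block):
                group.append(student)
        return [g for g in groups if g]

    students = [("A", 9.1), ("B", 8.4), ("C", 7.0), ("D", 6.2), ("E", 5.5), ("F", 4.8)]
    print(homogeneous_groups(students, 2))    # pairs with similar scores
    print(heterogeneous_groups(students, 2))  # pairs with similar average score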

    Breaking persistent working group partnerships: a social experiment

    Facing multidisciplinary projects is becoming quite common in companies worldwide, meaning that experts from a specific area must team up with experts from other areas in a dynamic, ad hoc manner. For a professional to meet such requirements successfully, it is important that teamwork skills are developed during college. However, this issue is usually not addressed thoroughly, and most students end up teaming up with the same partners over and over again, thereby failing to acquire the critical adaptability skills expected from them. To address this drawback, in this paper we present the results of a study where students were forced to team up with other partners based on the results of a computer networking skills-ranking exam. Experimental results confirm the repeating pattern in past partnerships and the students' resistance to partner changes. On the positive side, results show that having new partners indeed helps achieve a more even task distribution, and that students are moderately aware of the upcoming challenges in their future professional activity, recognizing the benefits of teaming up with new people.
    This work was partially supported by the School of Informatics (ETSINF) and the Department of Computer Engineering (DISCA) at the Universitat Politècnica de València.
    Tavares De Araujo Cesariny Calafate, CM.; Arlandis, J.; Torres Cortes, A. (2015). Breaking persistent working group partnerships: a social experiment. In INTED2015 Proceedings. IATED. 1329-1337. http://hdl.handle.net/10251/70447

    Batch-adaptive rejection threshold estimation with application to OCR post-processing

    An OCR process is often followed by the application of a language model to find the best transformation of an OCR hypothesis into a string compatible with the constraints of the document, field or item under consideration. The cost of this transformation can be taken as a confidence value and compared to a threshold to decide whether a string is accepted as correct or rejected, in order to satisfy the need for bounding the error rate of the system. Widespread tools such as ROC, precision-recall, or error-reject curves are commonly used along with fixed thresholding to achieve that goal. However, those methodologies fail when a test sample has a confidence distribution that differs from that of the sample used to train the system, which is a very frequent case in post-processed OCR strings (e.g., string batches showing particularly careful handwriting styles in contrast to free styles). In this paper, we propose an adaptive method for the automatic estimation of the rejection threshold that overcomes this drawback, allowing the operator to define an expected error rate within the set of accepted (non-rejected) strings of a complete batch of documents (as opposed to trying to establish or control the probability of error of a single string), regardless of its confidence distribution. The operator (expert) is assumed to know the error rate that is acceptable to the user of the resulting data. The proposed system transforms that knowledge into a suitable rejection threshold. The approach is based on the estimation of an expected error vs. transformation cost distribution. First, a model predicting the probability that a cost arises from an erroneously transcribed string is computed from a sample of supervised OCR hypotheses. Then, given a test sample, a cumulative error vs. cost curve is computed and used to automatically set the appropriate threshold that meets the user-defined error rate on the overall sample. The results of experiments on batches coming from different writing styles show very accurate error rate estimations where fixed thresholding clearly fails. An original procedure to generate distorted strings from a given language is also proposed and tested, which allows the use of the presented method in tasks where no real supervised OCR hypotheses are available to train the system.
    Navarro Cerdan, JR.; Arlandis Navarro, JF.; Llobet Azpitarte, R.; Perez-Cortes, J. (2015). Batch-adaptive rejection threshold estimation with application to OCR post-processing. Expert Systems with Applications. 42(21):8111-8122. doi:10.1016/j.eswa.2015.06.022
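    As a rough illustration of the batch-adaptive idea described above (a sketch under assumptions, not the authors' implementation), the snippet below fits a model of the probability that a transformation cost comes from a wrong string and then, for a new batch, picks the largest threshold whose expected error rate among the accepted strings stays below the user-defined target. The choice of logistic regression is an assumption made here for illustration; the paper only states that a probability-of-error model is estimated from supervised hypotheses.

    # Sketch only: batch-adaptive rejection threshold under an assumed
    # logistic model of P(error | transformation cost).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_error_model(train_costs, train_is_error):
        # Probability that a given cost was produced by an erroneously transcribed string.
        model = LogisticRegression()
        model.fit(np.asarray(train_costs).reshape(-1, 1), np.asarray(train_is_error))
        return model

    def batch_adaptive_threshold(model, batch_costs, target_error_rate):
        # Largest cost threshold whose expected accepted-set error rate meets the target.
        costs = np.sort(np.asarray(batch_costs, dtype=float))
        p_err = model.predict_proba(costs.reshape(-1, 1))[:, 1]
        expected_errors = np.cumsum(p_err)            # expected errors among accepted strings
        accepted = np.arange(1, len(costs) + 1)
        ok = np.where(expected_errors / accepted <= target_error_rate)[0]
        return costs[ok[-1]] if len(ok) else -np.inf  # reject everything if the target is unattainable

    Strings whose cost does not exceed the returned threshold would be accepted; everything else is rejected or sent for manual supervision.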

    Composition of Constraint, Hypothesis and Error Models to improve interaction in Human-Machine Interfaces

    We use Weighted Finite-State Transducers (WFSTs) to represent the different sources of information available: the initial hypotheses, the possible errors, the constraints imposed by the task (interaction language) and the user input. The fusion of these models to find the most probable output string can be performed efficiently by using carefully selected transducer operations. The proposed system initially suggests an output based on the set of hypotheses, possible errors and Constraint Models. Then, if human intervention is needed, a multimodal approach, where the user input is combined with the aforementioned models, is applied to produce the desired output with minimum user effort. This approach offers the practical advantages of a decoupled model (e.g. input system + parameterized rules + post-processor) while keeping the error-recovery power of an integrated approach, where all the steps of the process are performed in the same formal machine (as in a typical HMM in speech recognition) so that an error at a given step does not remain unrecoverable in the subsequent steps. After a presentation of the theoretical basis of the proposed multi-source information system, its application to two real-world problems is addressed as an example of the possibilities of this architecture. The experimental results obtained demonstrate that significant user effort can be saved when using the proposed procedure. A simple demonstration, to better understand and evaluate the proposed system, is available on the web at https://demos.iti.upv.es/hi/.
    Navarro Cerdan, JR.; Llobet Azpitarte, R.; Arlandis, J.; Perez-Cortes, J. (2016). Composition of Constraint, Hypothesis and Error Models to improve interaction in Human-Machine Interfaces. Information Fusion. 29:1-13. doi:10.1016/j.inffus.2015.09.001
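    The sketch below is a deliberately simplified stand-in for the transducer composition described above; it does not use WFSTs at all. The Constraint Model is reduced to a finite set of valid strings, the Error Model to a weighted edit distance against the OCR hypothesis, and the fused output is simply the constraint-compatible string with the lowest total cost. All names and costs are assumptions made for illustration.

    # Hedged illustration of fusing hypothesis, error and constraint information.
    # The paper composes WFSTs; here the constraint language is just enumerated.

    def edit_cost(hyp, target, sub=1.0, ins=1.0, dele=1.0):
        # Weighted Levenshtein cost between the OCR hypothesis and a candidate string.
        n, m = len(hyp), len(target)
        d = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            d[i][0] = i * dele
        for j in range(1, m + 1):
            d[0][j] = j * ins
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d[i][j] = min(d[i - 1][j] + dele,
                              d[i][j - 1] + ins,
                              d[i - 1][j - 1] + (0.0 if hyp[i - 1] == target[j - 1] else sub))
        return d[n][m]

    def best_correction(hypothesis, constraint_language):
        # Pick the string allowed by the Constraint Model with the lowest error-model cost.
        return min(constraint_language, key=lambda s: edit_cost(hypothesis, s))

    print(best_correction("46O22", {"46022", "46021", "03690"}))  # -> 46022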

    Handwritten Character Recognition using the Continuous Distance Transformation

    In this paper, a feature extraction method for images is presented. It is based on the classical concept of the Distance Transformation (DT), from which we develop a generalization: the Continuous Distance Transformation (CDT). Whereas the DT can only be applied to binary images, the CDT can be applied to both binary and gray-scale or color pictures. Furthermore, we define a number of new metrics and dissimilarity measures based on the CDT.
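    The abstract does not give enough detail to reproduce the CDT itself, but the classical Distance Transformation it generalizes can be sketched in a few lines. The two-pass chamfer scan below (the specific DT variant is an assumption) assigns each pixel of a binary image its approximate distance to the nearest object pixel; the CDT would extend this kind of map to gray-scale and color images.

    # Sketch of a classical two-pass chamfer Distance Transformation on a binary image.
    import numpy as np

    def chamfer_distance_transform(binary):
        # binary: 2-D array where nonzero pixels are 'object'; returns per-pixel distances.
        h, w = binary.shape
        INF = float(h + w)
        dist = np.where(binary > 0, 0.0, INF)
        # Forward pass: top-left to bottom-right (N, NW, NE, W neighbours).
        for y in range(h):
            for x in range(w):
                if y > 0:
                    dist[y, x] = min(dist[y, x], dist[y - 1, x] + 1.0)
                    if x > 0:
                        dist[y, x] = min(dist[y, x], dist[y - 1, x - 1] + 1.414)
                    if x < w - 1:
                        dist[y, x] = min(dist[y, x], dist[y - 1, x + 1] + 1.414)
                if x > 0:
                    dist[y, x] = min(dist[y, x], dist[y, x - 1] + 1.0)
        # Backward pass: bottom-right to top-left (S, SE, SW, E neighbours).
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                if y < h - 1:
                    dist[y, x] = min(dist[y, x], dist[y + 1, x] + 1.0)
                    if x > 0:
                        dist[y, x] = min(dist[y, x], dist[y + 1, x - 1] + 1.414)
                    if x < w - 1:
                        dist[y, x] = min(dist[y, x], dist[y + 1, x + 1] + 1.414)
                if x < w - 1:
                    dist[y, x] = min(dist[y, x], dist[y, x + 1] + 1.0)
        return dist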

    Stochastic Error-Correcting Parsing for OCR Post-processing.

    In this paper, stochastic error-correcting parsing is proposed as a powerful and flexible method to post-process the results of an optical character recognizer (OCR). Deterministic and non-deterministic approaches are possible under the proposed setting. The basic units of the model can be words or complete sentences, and the lexicons or language databases can be simple enumerations or may convey probabilistic information from the application domain.
    The result of automatic optical recognition of printed or handwritten text is often affected by a considerable amount of error and uncertainty, and the application of a correction algorithm is therefore essential. A significant portion of the ability of humans to read handwritten text is due to their extraordinary error recovery power, thanks to the lexical, syntactic, semantic, pragmatic and discursive language constraints they apply. Among the different levels at which language can be modeled [2], the lowest one…
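    To make the stochastic part of the description concrete, the fragment below combines an error-model cost with a prior probability supplied by the lexicon in noisy-channel fashion: the corrected word minimizes the edit cost minus the log-probability of the candidate. This is a sketch under assumptions, not the paper's parser; the edit_cost argument is any weighted edit-distance function, such as the one sketched for the previous entry.

    # Sketch only: stochastic correction where lexicon entries carry prior probabilities.
    import math

    def stochastic_correction(hypothesis, lexicon_probs, edit_cost):
        # lexicon_probs: dict mapping valid words to their (positive) prior probabilities.
        return min(lexicon_probs,
                   key=lambda w: edit_cost(hypothesis, w) - math.log(lexicon_probs[w]))

    # e.g. stochastic_correction("streel", {"street": 0.7, "steel": 0.3}, edit_cost) -> "street"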

    Habitability, the scale of sustainability

    This paper explores an alternative to the so-called "sustainable" models and strategies currently applied in the fields of building, architecture and urbanism. Faced with irrational resource consumption, ever-growing waste generation and other problems seemingly inherent to the current industrial productive model, now transferred to the production of space, the most critical and concerned sectors within these disciplines keep applying scale-segregated sustainable solutions, i.e. working and intervening at the scale of the single built unit, or at that of the urban model. Instead, the paper explains ongoing research into the possibilities of generating another model based on the concept of "global habitability", which would allow the application of those and other new solutions and mechanisms at all scales in a much more holistic approach to the implementation of sustainability: working transversally and simultaneously, from the room to the city. Whereas current strategies aim at an increase in efficiency based exclusively on the reduction of resource consumption and waste generation, the new model proposes a redefinition of the other term involved, namely utility. The very subject of sustainability is changed through this redefinition: no longer space but activity, no longer the object but the process. Utility and use within architecture can be identified with habitability, understood here as the achievement of adequate social and environmental conditions to satisfy the socially acknowledged basic needs of people. Two different factors determine this idea of utility: on the one hand, the conditions of "matter", as an expression of the requirements related to space, resource flows and equipment needed to develop an activity; and on the other hand, the conditions of "orgware" or "privacy", a term that includes synergy (the relation between the level of individuality and the level of collectivity) and management (a combination of time, control and legislation). The main aim of the paper is thus to present this reformulation of the idea of "habitability" as the only effective strategy towards an implementation of sustainability in the field of building. A systemic intervention, rethinking the utility of architecture from the smallest spatial unit (the room) and extending its scale to that of the urban services (i.e. providers of any need that cannot be fulfilled within the dwelling), makes it possible to achieve maximum efficiency in terms of resource consumption; whereas a social focus, incorporating individual, collective and organizational demands, allows the strategy to take root in society, thus increasing the likelihood of its success.
    Peer Reviewed